1,672 research outputs found

    Speeding-up model-based fault injection of deep-submicron CMOS fault models through dynamic and partially reconfigurable FPGAS

    Submicron CMOS technologies are nowadays fundamental to the development of modern computer-based systems, whose use greatly simplifies our daily life in a wide variety of settings, such as e-government, e-commerce and e-banking, and ground and aerospace transportation. The continuous shrinking of transistors has made it possible to reduce their power consumption and increase their operating frequency, thus achieving higher overall performance. However, the very same characteristics that improve system performance negatively affect its dependability. The use of small, low-power, high-speed transistors is increasing both the diversity of faults that may affect the system and their probability of occurrence. There is therefore great interest in developing new and efficient techniques to evaluate the dependability, in the presence of faults, of systems manufactured with submicron technologies. This problem can be tackled through the deliberate introduction of faults into the system, a technique known as fault injection. In this context, model-based injection is particularly attractive, since it allows the dependability of the system to be evaluated in the early stages of its development cycle, thus reducing the cost associated with error correction. However, the simulation time of large and complex models makes this approach infeasible in many cases. This thesis focuses on the use of FPGA (Field-Programmable Gate Array) programmable logic devices to speed up simulation-based fault injection experiments by implementing them in reconfigurable hardware. To this end, existing research on FPGA-based fault injection is extended in two different directions: (i) a study of existing submicron technologies is carried out to obtain a representative set of transient fault models. Andrés Martínez, DD. (2007). Speeding-up model-based fault injection of deep-submicron CMOS fault models through dynamic and partially reconfigurable FPGAS [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1943

    Facilitando la autorregulación del aprendizaje en el diseño de sistemas digitales

    [EN] The software development paradigm students are used to is completely different from the one that must be followed when facing the challenge of specifying and implementing digital systems (hardware). This change in perspective on how to tackle practical problems can be quite troublesome for students, especially when they lack tools that ease the self-regulation of their cognitive processes. This paper adapts the self-regulated learning model by Winne and Hadwin, based on continuously monitoring and controlling one's own learning process, to the context of designing digital systems. Scaffolding and prompting techniques are essential to facilitate the development of self-regulation strategies. The paper presents the planning carried out, its initial deployment, and the partial results obtained so far. The authors acknowledge the funding received from the Vicerrectorado de Estudios, Calidad y Acreditación of the Universitat Politècnica de València to carry out the educational innovation and improvement project "Comunidades de Aprendizaje como servicios en la nube para el desarrollo y evaluación automática de Competencias Transversales y Objetivos Formativos específicos", reference B29. De Andrés Martínez, D. (2019). Facilitando la autorregulación del aprendizaje en el diseño de sistemas digitales. In IN-RED 2019. V Congreso de Innovación Educativa y Docencia en Red. Editorial Universitat Politècnica de València. 602-616. https://doi.org/10.4995/INRED2019.2019.10430

    Gaining confidence on dependability benchmarks conclusions through back-to-back testing

    ©2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The main goal of any benchmark is to guide decisions through system ranking, but surprisingly little research has focused so far on providing means to gain confidence in the analysis carried out with benchmark results. Including a back-to-back testing approach in the benchmark analysis process, to compare conclusions and gain confidence in the finally adopted choices, is a convenient way to cope with this challenge. The proposal is to look for coherence between the rankings issued by independent multiple-criteria decision making (MCDM) techniques applied to the results. Although any MCDM method can potentially be used, this paper reports our experience using the Logic Score of Preferences (LSP) and the Analytic Hierarchy Process (AHP). Discrepancies in the provided rankings invalidate conclusions and must be tracked down to discover incoherences and correct the related analysis errors. Once the rankings are coherent, so are the underlying analyses, thus increasing our confidence in the supplied conclusions. Work partially supported by the Spanish project ARENES (TIN2012-38308-C02-01). Martínez Raga, M.; Andrés Martínez, DD.; Ruiz García, JC. (2014). Gaining confidence on dependability benchmarks conclusions through back-to-back testing. IEEE. doi:10.1109/EDCC.2014.20
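
    As a rough illustration of the back-to-back idea described above (a hedged sketch, not the paper's implementation: the system names and scores are invented, and the LSP and AHP aggregations themselves are not reproduced), the coherence check boils down to comparing the rankings produced by two independent MCDM techniques:

        # Hypothetical scores produced by two independent MCDM techniques
        # (e.g. an LSP aggregation and an AHP synthesis) for the same targets.
        lsp_scores = {"system_A": 0.71, "system_B": 0.64, "system_C": 0.52}
        ahp_scores = {"system_A": 0.38, "system_B": 0.35, "system_C": 0.27}

        def ranking(scores):
            """Order targets from best (highest score) to worst."""
            return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

        r_lsp, r_ahp = ranking(lsp_scores), ranking(ahp_scores)
        if r_lsp == r_ahp:
            print("Rankings agree:", r_lsp)            # confidence in conclusions grows
        else:
            print("Discrepancy:", r_lsp, "vs", r_ahp)  # track down analysis errors

    Only when both rankings coincide are the conclusions kept; any discrepancy is treated as a symptom of an analysis error to be tracked down.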

    Simulating the effects of logic faults in implementation-level VITAL-compliant models

    [EN] Simulation-based fault injection is a well-known technique to assess the dependability of hardware designs specified using hardware description languages (HDL). Although logic faults are usually introduced in models defined at the register transfer level (RTL), the most accurate results are obtained by considering implementation-level models, which reflect the actual structure and timing of the circuit. These models consist of a list of interconnected technology-specific components (macrocells), provided by vendors and annotated with post-place-and-route delays. Macrocells described in the very high speed integrated circuit HDL (VHDL) should also comply with the VHDL initiative towards application specific integrated circuit libraries (VITAL) standard to be interoperable across standard simulators. However, the rigid architecture imposed by VITAL means that fault injection procedures applied at RTL cannot be used straightforwardly. This work identifies a set of generic operations on VITAL-compliant macrocells that are later used to define how to accurately simulate the effects of common logic fault models. The generality of this proposal is supported by the definition of a platform-specific fault injection procedure based on these operations. Three embedded processors, implemented using Xilinx's toolchain and the SIMPRIM library of macrocells, are considered as a case study, which exposes the gap between robustness assessments at RTL and at implementation level. This work has been partially funded by the Ministerio de Economia, Industria y Competitividad of Spain under grant agreement no TIN2016-81075-R, and the "Programa de Ayudas de Investigacion y Desarrollo" (PAID) of Universitat Politecnica de Valencia. Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2019). Simulating the effects of logic faults in implementation-level VITAL-compliant models. Computing. 101(2):77-96. https://doi.org/10.1007/s00607-018-0651-4
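
    As a hedged sketch of how implementation-level logic faults are typically emulated in practice (a generic illustration, not the set of operations defined in the paper; the signal path and durations are hypothetical, while force/noforce/run are standard ModelSim/QuestaSim commands), a campaign can drive the simulator with a generated script acting on macrocell ports:

        def stuck_at(signal, value, duration_ns):
            """Pin a macrocell output port to a constant value for a while."""
            return [f"force -freeze {signal} {value}",
                    f"run {duration_ns} ns",
                    f"noforce {signal}"]

        def bit_flip(signal, current_value, duration_ns):
            """Invert the value currently driven by a storage element."""
            flipped = "0" if current_value == "1" else "1"
            return stuck_at(signal, flipped, duration_ns)

        script = ["run 1000 ns"]                                  # reach the injection instant
        script += stuck_at("sim:/tb/dut/alu/lut5_12/O", "0", 50)  # hypothetical macrocell port
        script += ["run -all"]                                    # observe the fault effects
        print("\n".join(script))

    The VITAL-specific difficulty addressed by the paper is that the rigid macrocell architecture constrains which internal signals can be manipulated in this way and how the annotated timing interacts with the injected values.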

    Reversing FPGA Architectures for Speeding up Fault Injection: does it pay?

    [EN] Although initially considered for fast system prototyping, Field Programmable Gate Arrays (FPGAs) are gaining interest for implementing final products thanks to their inherent reconfiguration capabilities. As they are susceptible to soft errors in their configuration memory, the dependability of FPGA-based designs must be accurately evaluated before they can be used in critical systems. In recent years, research has focused on speeding up fault injection in FPGA-based systems by parallelising experimentation, reducing the injection time, and decreasing the number of experiments. Going a step further requires delving into the FPGA architecture, i.e. precisely determining which components implement the considered design (mapping) and which are exercised by the considered workload (profiling). Fault injection campaigns can then focus on the components actually used, in order to identify critical ones, i.e. those leading the target system to fail. Some manufacturers, like Xilinx, identify those bits in the FPGA configuration memory that may change the implemented design when affected by a soft error. However, their correspondence to particular components of the FPGA fabric and their relationship with the implementation-level model remain unknown. This paper addresses whether the effort of reversing an FPGA architecture to filter out redundant and unused essential bits pays off in terms of experimental time. Since reversing the complete architecture of an FPGA is a titanic task, as a first step towards this ambitious goal this paper focuses on the elements in charge of implementing the combinational logic of the design (Look-Up Tables). The experimental results that support this study derive from implementing three soft-core processors on a Zynq SoC FPGA and show the interest of the proposal. Grant PID2020-120271RB-I00 funded by MCIN/AEI/10.13039/501100011033. Tuzov, I.; De-Andrés-Martínez, D.; Ruiz, JC. (2022). Reversing FPGA Architectures for Speeding up Fault Injection: does it pay?. IEEE. 81-88. https://doi.org/10.1109/EDCC57035.2022.00023
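
    A minimal sketch of the filtering step under discussion (the bit addresses, the bit-to-LUT mapping, and the set of used LUTs below are invented placeholders; obtaining the real mapping is precisely the reverse-engineering effort whose cost the paper questions):

        # Vendor-reported essential bits: (frame address, bit offset) pairs.
        essential_bits = {(0x00400100, 17), (0x00400100, 44), (0x00400212, 3)}

        # Hypothetical outcome of reversing the bitstream layout: bit -> LUT location.
        bit_to_lut = {
            (0x00400100, 17): ("SLICE_X20Y41", "A6LUT"),
            (0x00400100, 44): ("SLICE_X20Y41", "B6LUT"),
            # (0x00400212, 3) configures routing rather than a LUT, so it maps to nothing
        }

        # LUTs used by the implemented design and exercised by the workload.
        used_luts = {("SLICE_X20Y41", "A6LUT")}

        targets = {b for b in essential_bits if bit_to_lut.get(b) in used_luts}
        print(f"{len(targets)} of {len(essential_bits)} essential bits kept as injection targets")

    The smaller the resulting target set, the fewer injections are needed; that saving is what must outweigh the cost of the reversing effort for the approach to pay off.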

    Increasing the Dependability of VLSI Systems Through Early Detection of Fugacious Faults

    Technology advances provide a myriad of advantages for VLSI systems, but also increase the sensitivity of the combinational logic to different fault profiles. Increasingly short faults, which until now had simply been filtered out and which are referred to as fugacious faults, require new attention, as they are considered a plausible warning sign of potential failures. Despite their increasing impact on modern VLSI systems, such faults are largely disregarded today by the safety industry. Their early detection is, however, critical to enable an early evaluation of potential risks for the system and the subsequent deployment of suitable failure avoidance mechanisms. For instance, the early detection of fugacious faults provides the necessary means to extend the mission time of a system through the timely avoidance of aging effects. Because classical detection mechanisms are not suited to cope with such fugacious faults, this paper proposes a method specifically designed to detect and diagnose them. The reported experiments show the feasibility and interest of the proposal. This work has been funded by the Spanish Ministry of Economy ARENES project (TIN2012-38308-C02-01). Espinosa García, J.; Andrés Martínez, DD.; Gil, P. (2015). Increasing the Dependability of VLSI Systems Through Early Detection of Fugacious Faults. IEEE Computer Society - Conference Publishing Services (CPS). https://doi.org/10.1109/EDCC.2015.13
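
    To make the notion concrete, the toy sketch below (not the detection mechanism proposed in the paper; the trace, sampling step and width threshold are invented) flags pulses on a sampled net that are narrower than the window a classical latch-based detector would capture:

        def short_pulses(trace, step_ps, max_width_ps):
            """Return (start index, width in ps) of pulses narrower than max_width_ps."""
            pulses, start = [], None
            for i, v in enumerate(trace + [0]):          # sentinel closes a trailing pulse
                if v and start is None:
                    start = i
                elif not v and start is not None:
                    width = (i - start) * step_ps
                    if width < max_width_ps:
                        pulses.append((start, width))
                    start = None
            return pulses

        trace = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0]     # samples of a combinational net
        print(short_pulses(trace, step_ps=25, max_width_ps=80))   # -> [(2, 25), (10, 25)]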

    From measures to conclusions using Analytic Hierarchy Process in dependability benchmarking

    © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Dependability benchmarks are aimed at comparing and selecting alternatives in application domains where faulty conditions are present. However, despite its importance and intrinsic complexity, a rigorous decision process has not been defined yet. As a result, benchmark conclusions may vary from one evaluator to another, and often that process is vague and hard to follow, or even nonexistent. This situation affects the repeatability and reproducibility of the analysis process, making the cross-comparison of results between works difficult. To mitigate these problems, this paper proposes the integration of the analytic hierarchy process (AHP), a widely used multicriteria decision-making technique, within dependability benchmarks. In addition, an assisted pairwise comparison approach is proposed to automate those aspects of AHP that rely on judgmental comparisons, thus granting consistent, repeatable, and reproducible conclusions. Results from a dependability benchmark for wireless sensor networks are used to illustrate and validate the proposed approach. This work was supported in part by the Spanish Project ARENES under Grant TIN2012-38308-C02-01 and in part by the Programa de Ayudas de Investigacion y Desarrollo through the Universitat Politecnica de Valencia, Valencia, Spain. The Associate Editor coordinating the review process was Dr. Dario Petri. Martínez Raga, M.; Andrés Martínez, DD.; Ruiz García, JC.; Friginal López, J. (2014). From measures to conclusions using Analytic Hierarchy Process in dependability benchmarking. IEEE Transactions on Instrumentation and Measurement. 63(11):2548-2556. https://doi.org/10.1109/TIM.2014.2348632
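
    For readers unfamiliar with AHP, the sketch below shows the basic computation the proposal builds on: priority weights and a consistency ratio derived from a pairwise comparison matrix (the 3-criteria matrix is an invented example; the assisted pairwise comparison approach itself is not reproduced here):

        import math

        A = [[1,   3,   5],      # pairwise judgments on Saaty's 1-9 scale
             [1/3, 1,   3],
             [1/5, 1/3, 1]]
        n = len(A)

        # Geometric-mean approximation of the principal eigenvector (the priority weights).
        gm = [math.prod(row) ** (1 / n) for row in A]
        w = [g / sum(gm) for g in gm]

        # Consistency ratio: approximate lambda_max, then compare against Saaty's RI(3) = 0.58.
        col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
        lambda_max = sum(col_sums[j] * w[j] for j in range(n))
        CR = ((lambda_max - n) / (n - 1)) / 0.58
        print([round(x, 3) for x in w], round(CR, 3))    # CR < 0.1 -> acceptably consistent

    Automating the pairwise judgments, as the paper proposes, keeps such matrices consistent by construction and makes the resulting conclusions repeatable across evaluators.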

    Analysis of results in Dependability Benchmarking: Can we do better?

    ©2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Over the years, dependability benchmarking has become more and more important in the evaluation of systems. The increasing need to make systems more dependable in the presence of perturbations has contributed to this fact. Nevertheless, even though many studies have focused on different areas related to dependability benchmarking, and some others have focused on the need to provide these benchmarks with good quality measures, there is still a gap in the analysis of results. This paper provides a first glance at different approaches that may help fill this gap by making explicit the criteria followed in the decision-making process. This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), and the Intel Doctoral Student Honour Programme 2012. Martínez, M.; Andrés, DD.; Ruiz García, JC.; Friginal López, J. (2013). Analysis of results in Dependability Benchmarking: Can we do better? IEEE. https://doi.org/10.1109/IWMN.2013.6663790

    Biogas production from the liquid waste of distilled gin production: Optimization of UASB reactor performance with increasing organic loading rate for co-digestion with swine wastewater

    This study is the first to show that high-rate anaerobic digestion is an efficient technological process for the treatment of gin spent wash. The gin spent wash was co-digested in UASB reactors with swine wastewater, which provided nutrients and alkalinity. The process was optimized by increasing the proportion of gin spent wash in the feed, and thus the organic loading rate (OLR), up to reactor failure. Stable high-efficiency operation was reached at an OLR as high as 28.5 kg COD m−3 d−1, yielding 8.4 m3 CH4 m−3 d−1 and attaining a COD removal of 97.0%. At an OLR of 32.0 kg COD m−3 d−1, the process became unstable and the reactor underwent over-acidification, which drastically lowered the pH and suppressed methanogenesis. The failure of the reactor was caused by a combination of organic overloading and an alkalinity deficit that uncoupled acidogenesis from methanogenesis.
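
    As a quick sanity check of the figures quoted above (assuming the reported OLR, COD removal and methane production all refer to the same reactor volume), the implied specific methane yield is

        \[
        Y_{\mathrm{CH_4}}
          = \frac{8.4\ \mathrm{m^3\,CH_4\,m^{-3}\,d^{-1}}}
                 {28.5\ \mathrm{kg\,COD\,m^{-3}\,d^{-1}} \times 0.97}
          \approx 0.30\ \mathrm{m^3\,CH_4\ per\ kg\ COD\ removed},
        \]

    which is close to the theoretical maximum of roughly 0.35 m3 CH4 per kg of COD removed, consistent with the claim of stable high-efficiency operation at that loading rate.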

    Ambient noise in wireless mesh networks: Evaluation and proposal of an adaptive algorithm to mitigate link removal

    [EN] Ambient noise is one of the major problems in Wireless Mesh Networks (WMNs). It is responsible for adverse effects on communications, such as packet dropping, which dramatically affect the behaviour of ad hoc routing protocols, a key element of these networks. This issue is of prime importance for WMNs, since the loss of communication links experienced by nodes may strongly increase the convergence time of the network. Furthermore, the dynamic nature of this problem makes it difficult to address with traditional techniques. The contribution of this paper is twofold: (i) exploring this problem by assessing the behaviour of three state-of-the-art routing protocols (OLSR, B.A.T.M.A.N and Babel) in the presence of ambient noise, and (ii) improving the resilience of these protocols against ambient noise by proposing an algorithm for the link quality-based adaptive replication of packets, named LARK. The goal of LARK is to avoid the loss of communication links in the presence of high levels of ambient noise. The effectiveness of the proposal is experimentally assessed, thus establishing a new method to reduce the impact of ambient noise on WMNs. This work is partially supported by the Spanish projects ARENES (TIN2012-38308-C02-01) and SEMSECAP (TIN-2009-13825), the ANR French project AMORES (ANR-11-INSE-010), and the Intel Doctoral Student Honour Programme 2012. Friginal López, J.; Ruiz García, JC.; Andrés Martínez, DD.; Bustos Rodríguez, A. (2014). Ambient noise in wireless mesh networks: Evaluation and proposal of an adaptive algorithm to mitigate link removal. Journal of Network and Computer Applications. 41:505-516. https://doi.org/10.1016/j.jnca.2014.02.004
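
    The general idea behind link-quality-based adaptive replication can be sketched as follows (a hedged illustration only: the thresholds, quality metric and copy counts are invented and do not reproduce the actual LARK algorithm):

        def replication_factor(link_quality, max_copies=4):
            """Map a link quality estimate in [0, 1] to a number of packet copies."""
            if link_quality >= 0.8:
                return 1                     # clean link: no replication needed
            if link_quality >= 0.5:
                return 2
            if link_quality >= 0.3:
                return 3
            return max_copies                # heavily degraded link under ambient noise

        def send(packet, link_quality, transmit):
            for _ in range(replication_factor(link_quality)):
                transmit(packet)

        send("HELLO", 0.42, transmit=lambda p: print("tx", p))   # emits 3 copies

    Replicating control packets on noisy links makes it less likely that the routing protocol declares the link lost, at the price of extra transmissions when quality degrades.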